Generates a prediction for the observation using the specified ML Model.
Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
For .NET Core, this operation is only available in asynchronous form. Please refer to PredictAsync.
Namespace: Amazon.MachineLearning
Assembly: AWSSDK.MachineLearning.dll
Version: 3.x.y.z
public abstract PredictResponse Predict(
    String mlModelId,
    String predictEndpoint,
    Dictionary<String, String> record
)
mlModelId (String)
A unique identifier of the MLModel.
predictEndpoint (String)
A property of PredictRequest used to execute the Predict service method.
record (Dictionary<String, String>)
A property of PredictRequest used to execute the Predict service method.
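For context, here is a minimal, illustrative sketch of calling this operation synchronously from the .NET Framework client (on .NET Core, use PredictAsync instead). The model ID, endpoint URL, and record attribute names are placeholders rather than values from this page, and PredictedValue is only one of the Prediction fields that may be populated, per the note above.

using System;
using System.Collections.Generic;
using Amazon.MachineLearning;
using Amazon.MachineLearning.Model;

class PredictExample
{
    static void Main()
    {
        // The concrete client implements the abstract Predict method shown above.
        var client = new AmazonMachineLearningClient();

        // The observation to score, as variable-name/value pairs.
        // "age" and "plan" are placeholder attribute names from a hypothetical model.
        var record = new Dictionary<string, string>
        {
            { "age", "42" },
            { "plan", "premium" }
        };

        // Both the model ID and endpoint URL below are placeholders; use the
        // real-time endpoint created for your own MLModel.
        PredictResponse response = client.Predict(
            "ml-exampleModelId",
            "https://realtime.machinelearning.us-east-1.amazonaws.com",
            record);

        // Which Prediction fields are populated depends on the model type
        // (see the note above); PredictedValue is typical for regression models.
        Console.WriteLine(response.Prediction.PredictedValue);
    }
}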
| Exception | Condition |
| --- | --- |
| InternalServerException | An error on the server occurred when trying to process a request. |
| InvalidInputException | An error on the client occurred. Typically, the cause is an invalid input value. |
| LimitExceededException | The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource. |
| PredictorNotMountedException | The exception is thrown when a predict request is made to an unmounted MLModel. |
| ResourceNotFoundException | A specified resource cannot be located. |
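The sketch below shows one way to handle the more commonly recoverable of these exceptions; the helper name TryPredict and its retry guidance are illustrative assumptions, not part of the SDK.

using System;
using System.Collections.Generic;
using Amazon.MachineLearning;
using Amazon.MachineLearning.Model;

static class PredictErrorHandling
{
    // Returns null when the call fails for one of the conditions listed above.
    public static PredictResponse TryPredict(IAmazonMachineLearning client, string mlModelId,
                                             string predictEndpoint, Dictionary<string, string> record)
    {
        try
        {
            return client.Predict(mlModelId, predictEndpoint, record);
        }
        catch (PredictorNotMountedException)
        {
            // No real-time endpoint is mounted for the MLModel; create one and retry later.
            return null;
        }
        catch (InvalidInputException e)
        {
            // A client-side problem, typically an invalid input value in the record.
            Console.Error.WriteLine(e.Message);
            return null;
        }
        catch (LimitExceededException)
        {
            // The maximum number of operations was exceeded; back off before retrying.
            return null;
        }
    }
}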
.NET Framework: Supported in: 4.5 and newer, 3.5